Day 16: Prompt Engineering
Even with the same model, the quality of the results can change dramatically depending on how you write the prompt. Today we cover the core prompting techniques and practical tips.
Role Assignment and Zero-shot / Few-shot
```python
from openai import OpenAI

client = OpenAI()  # requires the OPENAI_API_KEY environment variable

# Zero-shot: ask directly, without examples
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a sentiment analysis expert. Answer only with positive/negative/neutral."},
        {"role": "user", "content": "The shipping was fast but the quality was disappointing."},
    ],
)
print("Zero-shot:", response.choices[0].message.content)

# Few-shot: provide examples so the model can follow the pattern
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Analyze the sentiment."},
        {"role": "user", "content": "The food is delicious. -> positive\nThe service is slow. -> negative\nIt's just ordinary. -> neutral\n\nThe shipping was fast but the quality was disappointing. ->"},
    ],
)
print("Few-shot:", response.choices[0].message.content)
```
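Few-shot examples can also be supplied as alternating user/assistant turns instead of one packed string, which often makes the pattern easier for the model to follow. A minimal sketch; the `build_few_shot_messages` helper is illustrative, not part of the OpenAI SDK:

```python
def build_few_shot_messages(system: str, examples: list[tuple[str, str]], query: str) -> list[dict]:
    """Build a chat message list where each example becomes a user/assistant turn pair."""
    messages = [{"role": "system", "content": system}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages

examples = [
    ("The food is delicious.", "positive"),
    ("The service is slow.", "negative"),
    ("It's just ordinary.", "neutral"),
]
messages = build_few_shot_messages(
    "Analyze the sentiment.",
    examples,
    "The shipping was fast but the quality was disappointing.",
)
# The resulting list can be passed directly as the messages= argument above.
```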
Chain-of-Thought (CoT)
For complex reasoning problems, guiding step-by-step thinking significantly improves accuracy. A simple phrase like “Think step by step” can be very effective.
```python
# Normal prompt vs. CoT prompt comparison
normal_prompt = "A store had 23 apples. 11 were sold and 6 more were brought in. How many apples are left?"
cot_prompt = """A store had 23 apples. 11 were sold and 6 more were brought in. How many apples are left?
Solve step by step:
Step 1: Check the initial number of apples.
Step 2: Subtract the number sold.
Step 3: Add the newly brought-in apples.
Step 4: Calculate the final answer."""

# Run both prompts so the outputs can actually be compared
for name, prompt in [("Normal", normal_prompt), ("CoT", cot_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"{name}:", response.choices[0].message.content)
```
Structured Output (JSON Mode)
To process LLM output programmatically, a structured format such as JSON is essential.
```python
import json

response = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Respond only in JSON format."},
        {"role": "user", "content": """Extract information from the following review:
"I've been using the Samsung Galaxy S24 for 2 weeks. The camera is excellent but the battery is disappointing. It cost $999."
Format: {"product": "", "duration": "", "pros": [], "cons": [], "price": ""}"""},
    ],
)
result = json.loads(response.choices[0].message.content)
print(json.dumps(result, ensure_ascii=False, indent=2))
```
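Even in JSON mode the model can omit fields, so it is worth validating the parsed object before using it downstream. A minimal sketch; the required-keys set mirrors the format above, and `parse_review_json` is an illustrative helper, not a library function:

```python
import json

REQUIRED_KEYS = {"product", "duration", "pros", "cons", "price"}

def parse_review_json(raw: str) -> dict:
    """Parse model output and raise ValueError if a required field is missing."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return data

raw = '{"product": "Samsung Galaxy S24", "duration": "2 weeks", "pros": ["camera"], "cons": ["battery"], "price": "$999"}'
review = parse_review_json(raw)
print(review["product"])
```

In production you would typically retry the request, or fall back to a default, when validation fails.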
10 Prompt Tips
- Assign a clear role: "You are a data scientist with 10 years of experience"
- Specify the output format: explicitly state the desired format (JSON, table, numbered list, etc.)
- Use delimiters: separate inputs from instructions with `---` or `###`
- Prefer affirmative over negative phrasing: instead of "Don't do X", say "Do Y"
- Show examples: Few-shot is almost always better than Zero-shot
- Break it into steps: for complex tasks, "First... Then... Finally..."
- State constraints explicitly: "Within 3 sentences", "Under 200 characters"
- Set escape conditions: "If you don't know, say 'I don't know'"
- Adjust the temperature: low (0.1-0.3) for factual tasks, high (0.7-1.0) for creative tasks
- Iterate and experiment: no prompt is perfect on the first try; improve gradually
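Several of the tips above (clear delimiters, explicit constraints) can be combined in a small prompt-building helper. A sketch under the assumption that `###` markers fence the input; the `build_prompt` function is illustrative:

```python
def build_prompt(instruction: str, input_text: str, constraints: list[str]) -> str:
    """Compose an instruction, delimited input, and explicit constraints into one prompt."""
    lines = [instruction, "", "### INPUT", input_text, "### END INPUT", "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    "Summarize the review below.",
    "The shipping was fast but the quality was disappointing.",
    ["Within 3 sentences", "Answer in English"],
)
print(prompt)
```

Keeping the instruction, input, and constraints in fixed slots makes prompts easier to iterate on, since each part can be changed independently.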
Today’s Exercises
- Build a system that classifies Korean news headlines into 5 categories: “Politics/Economy/Society/IT/Sports” using a Few-shot prompt. Include 5 examples.
- Write a prompt using the Chain-of-Thought technique to solve a math problem (system of equations), and compare the accuracy with a version without CoT.
- Using JSON output mode, write a prompt that extracts name, contact info, education, and work experience from free-form resume text into structured JSON.